Introduction to Open Data Science - Course Project

About the project

I’m looking forward to taking part in the Introduction to Open Data Science course. I hope to learn how to use open and free software like R to analyze data, as well as to learn more about different statistical methods.

My GitHub repository

# This is a so-called "R chunk" where you can write R code.

date()
## [1] "Thu Dec 03 21:52:50 2020"

Exercise 2: Data analysis

date()
## [1] "Thu Dec 03 21:52:50 2020"

Data description

lrn14 <- read.table("http://s3.amazonaws.com/assets.datacamp.com/production/course_2218/datasets/learning2014.txt", sep = ",", header = TRUE)

dim(lrn14)
## [1] 166   7
str(lrn14)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : chr  "F" "M" "F" "M" ...
##  $ age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ attitude: num  3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ points  : int  25 12 24 10 22 21 21 31 24 26 ...

The dataset learning2014 includes 7 different variables and 166 observations. The observations represent students. Besides the background variables gender and age, the dataset includes information on how each student scored in an exam (variable points) and their attitude towards statistics on a scale of 1 to 5 (attitude). The variable deep is the average score, on a scale of 1 to 5, of questions concerning deep learning approaches. The variables stra and surf contain the corresponding average scores for questions about strategic and surface learning approaches.
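The scale variables deep, stra and surf were formed in a separate data wrangling step by averaging sets of Likert-scale questions. A minimal sketch of how such a scale average is computed, using hypothetical question columns (the real question names are not shown here):

# A sketch of forming a scale average, assuming hypothetical items q1-q3
# measured on a 1-5 Likert scale
df <- data.frame(q1 = c(4, 3), q2 = c(5, 2), q3 = c(3, 4))
df$deep <- rowMeans(df[, c("q1", "q2", "q3")])  # mean of the items per student
df$deep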

More information about the original dataset, which was used to form this dataset, can be found here

A graphical overview of the data and summary statistics

library(GGally)
## Loading required package: ggplot2
## Registered S3 method overwritten by 'GGally':
##   method from   
##   +.gg   ggplot2
library(ggplot2)

p <- ggpairs(lrn14, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))

p

summary(lrn14)
##     gender               age           attitude          deep      
##  Length:166         Min.   :17.00   Min.   :1.400   Min.   :1.583  
##  Class :character   1st Qu.:21.00   1st Qu.:2.600   1st Qu.:3.333  
##  Mode  :character   Median :22.00   Median :3.200   Median :3.667  
##                     Mean   :25.51   Mean   :3.143   Mean   :3.680  
##                     3rd Qu.:27.00   3rd Qu.:3.700   3rd Qu.:4.083  
##                     Max.   :55.00   Max.   :5.000   Max.   :4.917  
##       stra            surf           points     
##  Min.   :1.250   Min.   :1.583   Min.   : 7.00  
##  1st Qu.:2.625   1st Qu.:2.417   1st Qu.:19.00  
##  Median :3.188   Median :2.833   Median :23.00  
##  Mean   :3.121   Mean   :2.787   Mean   :22.72  
##  3rd Qu.:3.625   3rd Qu.:3.167   3rd Qu.:27.75  
##  Max.   :5.000   Max.   :4.333   Max.   :33.00

From the graph above we can see the distributions of the different variables by gender. Below the graph are summary statistics for each variable. The average age of the study participants is 25.51 years, and the majority of participants are women. The attitude variable has the strongest correlation with exam points, and the scatter plot of these two variables also indicates that there may be some kind of association between them.

Linear regression models

model1 <- lm(points ~ attitude + stra + surf, data = lrn14)

summary(model1)
## 
## Call:
## lm(formula = points ~ attitude + stra + surf, data = lrn14)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.1550  -3.4346   0.5156   3.6401  10.8952 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.0171     3.6837   2.991  0.00322 ** 
## attitude      3.3952     0.5741   5.913 1.93e-08 ***
## stra          0.8531     0.5416   1.575  0.11716    
## surf         -0.5861     0.8014  -0.731  0.46563    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared:  0.2074, Adjusted R-squared:  0.1927 
## F-statistic: 14.13 on 3 and 162 DF,  p-value: 3.156e-08
model2 <- lm(points ~ attitude , data = lrn14)

summary(model2)
## 
## Call:
## lm(formula = points ~ attitude, data = lrn14)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.9763  -3.2119   0.4339   4.1534  10.6645 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.6372     1.8303   6.358 1.95e-09 ***
## attitude      3.5255     0.5674   6.214 4.12e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared:  0.1906, Adjusted R-squared:  0.1856 
## F-statistic: 38.61 on 1 and 164 DF,  p-value: 4.119e-09

We choose the variables attitude, stra and surf as independent variables to fit a linear regression model where points, i.e. exam points, is the dependent variable. The chosen independent variables were the ones most correlated with the dependent variable. From the Model 1 summary we can see that only attitude has a statistically significant coefficient (3.3952), as its p-value is < 0.0001. With such a low p-value we can reject the null hypothesis that the coefficient is zero. When we fit Model 2 without stra and surf, the coefficient for attitude is still significant and its value (3.5255) is close to that in Model 1.
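The correlations behind this variable choice can be listed directly (a quick check, assuming lrn14 as loaded above; gender is dropped as it is not numeric):

# correlations of the numeric variables with exam points
cor(lrn14[, -1])[, "points"]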

The multiple R-squared tells us how much of the variation in points our models explain. The multiple R-squared values are quite close to each other in both models (Model 1: 0.2074, Model 2: 0.1906), so roughly speaking both models explain approx. 20% of the variation in points. In Model 2, with a single independent variable, the multiple R-squared is simply the square of the correlation coefficient between the independent and the dependent variable.
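This is easy to verify (a quick check, assuming lrn14 and model2 from above are in the workspace):

# for a single-predictor model, R-squared equals the squared correlation
cor(lrn14$attitude, lrn14$points)^2  # ~0.1906
summary(model2)$r.squared            # same value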

Diagnostic plots

par(mfrow =c(2,2))
plot(model2, which = c(1,2,5) )

From the QQ plot we can see that the residuals fall reasonably well along the line, which indicates that the normality assumption of the error terms is not violated.

From the residuals vs. fitted values plot we can see that there is no clear pattern. The size of the residuals does not appear to depend on the fitted values, which indicates that the constant variance assumption of the error terms is not violated.

The leverage plot helps us to assess whether there are influential outliers in our data that affect our estimation results. From our leverage plot we see that there are no influential outliers, as the leverage values on the x-axis are very small.
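The same checks can also be done numerically; a small sketch using base R diagnostics (assuming model2 from above):

# numeric influence diagnostics for model2
head(sort(hatvalues(model2), decreasing = TRUE))       # largest leverage values
head(sort(cooks.distance(model2), decreasing = TRUE))  # largest Cook's distances
# a common rule of thumb flags observations with Cook's distance > 1
any(cooks.distance(model2) > 1)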

Exercise 3: Logistic regression

date()
## [1] "Thu Dec 03 21:52:57 2020"

Data description

library(dplyr); library(ggplot2)
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
setwd("~/R/IODS-project")
pormat<-read.csv("data/pormat.csv", header=T)

str(pormat)
## 'data.frame':    370 obs. of  35 variables:
##  $ school    : chr  "GP" "GP" "GP" "GP" ...
##  $ sex       : chr  "F" "F" "F" "F" ...
##  $ age       : int  15 15 15 15 15 15 15 15 15 15 ...
##  $ address   : chr  "R" "R" "R" "R" ...
##  $ famsize   : chr  "GT3" "GT3" "GT3" "GT3" ...
##  $ Pstatus   : chr  "T" "T" "T" "T" ...
##  $ Medu      : int  1 1 2 2 3 3 3 2 3 3 ...
##  $ Fedu      : int  1 1 2 4 3 4 4 2 1 3 ...
##  $ Mjob      : chr  "at_home" "other" "at_home" "services" ...
##  $ Fjob      : chr  "other" "other" "other" "health" ...
##  $ reason    : chr  "home" "reputation" "reputation" "course" ...
##  $ guardian  : chr  "mother" "mother" "mother" "mother" ...
##  $ traveltime: int  2 1 1 1 2 1 2 2 2 1 ...
##  $ studytime : int  4 2 1 3 3 3 3 2 4 4 ...
##  $ schoolsup : chr  "yes" "yes" "yes" "yes" ...
##  $ famsup    : chr  "yes" "yes" "yes" "yes" ...
##  $ activities: chr  "yes" "no" "yes" "yes" ...
##  $ nursery   : chr  "yes" "no" "yes" "yes" ...
##  $ higher    : chr  "yes" "yes" "yes" "yes" ...
##  $ internet  : chr  "yes" "yes" "no" "yes" ...
##  $ romantic  : chr  "no" "yes" "no" "no" ...
##  $ famrel    : int  3 3 4 4 4 4 4 4 4 4 ...
##  $ freetime  : int  1 3 3 3 2 3 2 1 4 3 ...
##  $ goout     : int  2 4 1 2 1 2 2 3 2 3 ...
##  $ Dalc      : int  1 2 1 1 2 1 2 1 2 1 ...
##  $ Walc      : int  1 4 1 1 3 1 2 3 3 1 ...
##  $ health    : int  1 5 2 5 3 5 5 4 3 4 ...
##  $ failures  : int  0 1 0 0 1 0 1 0 0 0 ...
##  $ paid      : chr  "yes" "no" "no" "no" ...
##  $ absences  : int  3 2 8 2 5 2 0 1 9 10 ...
##  $ G1        : int  10 10 14 10 12 12 11 10 16 10 ...
##  $ G2        : int  12 8 13 10 12 12 6 10 16 10 ...
##  $ G3        : int  12 8 12 9 12 12 6 10 16 10 ...
##  $ alc_use   : num  1 3 1 1 2.5 1 2 2 2.5 1 ...
##  $ high_use  : logi  FALSE TRUE FALSE FALSE TRUE FALSE ...

The data set pormat includes 35 different variables and 370 observations. It has been constructed by joining two data sets with information on students from two Portuguese schools. The two original data sets contained information on students’ performance in mathematics and in the Portuguese language. Variables G1-G3 are the grades; when the data sets were joined, the grades for the two subjects were averaged to create the grade variables in the joined data set. The numbers of absences and failures were likewise averaged across the two subjects. The paid variable indicates whether the student took extra paid classes in the subject; in the joined data set it represents the answer for the Portuguese language. The other variables are background variables such as age, sex, parents’ level of education, etc.

When the joined data set was constructed, two new variables were created: alc_use, which is the average of workday (Dalc) and weekend (Walc) alcohol consumption, and high_use, which is a binary variable indicating whether the average alcohol consumption was higher than 2.
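A minimal sketch of how these two variables can be derived (the actual wrangling was done in a separate script; this assumes a joined data set with the Dalc and Walc columns):

library(dplyr)
# average of workday and weekend alcohol consumption, and a binary indicator
pormat <- mutate(pormat,
                 alc_use = (Dalc + Walc) / 2,
                 high_use = alc_use > 2)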

The two original data sets and a detailed description of all the variables in them are available here.

Variables that may be associated with alcohol consumption

Absences: Students who have a lot of absences from class may also have a high alcohol consumption, as absences may reflect students skipping school because they feel too hungover.

Failures: A high alcohol consumption may harm school performance, and students who consume high amounts of alcohol may not pass exams as often as other students.

Sex: It is a commonly known fact that men consume more alcohol than women.

Pstatus (parents’ cohabitation status): Students whose parents live apart may consume more alcohol, because their single parents may have to work a lot to provide for the family and cannot spend time supervising what their children are doing; a high alcohol consumption may also be a child’s reaction to the parents’ separation.

The distributions of the variables and their relationships with alcohol consumption

library(tidyr); library(dplyr); library(ggplot2); library(gmodels)

vars<-subset(pormat, select= c("failures","sex","absences", "Pstatus", "alc_use", "high_use"))

# draw a bar plot of each variable
gather(vars) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free")+geom_bar()

g1 <- ggplot(pormat, aes(x = high_use, y = absences))
g1 + geom_boxplot() + ylab("absences") + ggtitle("Student absences by alcohol consumption")

table(high_use = pormat$high_use, failures = pormat$failures)
##         failures
## high_use   0   1   2   3
##    FALSE 238  12   8   1
##    TRUE   87  12   9   3
CrossTable(pormat$high_use, pormat$sex, prop.c=TRUE, prop.r=F, prop.chisq=F, prop.t=F)
## 
##  
##    Cell Contents
## |-------------------------|
## |                       N |
## |           N / Col Total |
## |-------------------------|
## 
##  
## Total Observations in Table:  370 
## 
##  
##                 | pormat$sex 
## pormat$high_use |         F |         M | Row Total | 
## ----------------|-----------|-----------|-----------|
##           FALSE |       154 |       105 |       259 | 
##                 |     0.790 |     0.600 |           | 
## ----------------|-----------|-----------|-----------|
##            TRUE |        41 |        70 |       111 | 
##                 |     0.210 |     0.400 |           | 
## ----------------|-----------|-----------|-----------|
##    Column Total |       195 |       175 |       370 | 
##                 |     0.527 |     0.473 |           | 
## ----------------|-----------|-----------|-----------|
## 
## 
CrossTable(pormat$high_use, pormat$Pstatus, prop.c=TRUE, prop.r=F, prop.chisq=F, prop.t=F)
## 
##  
##    Cell Contents
## |-------------------------|
## |                       N |
## |           N / Col Total |
## |-------------------------|
## 
##  
## Total Observations in Table:  370 
## 
##  
##                 | pormat$Pstatus 
## pormat$high_use |         A |         T | Row Total | 
## ----------------|-----------|-----------|-----------|
##           FALSE |        26 |       233 |       259 | 
##                 |     0.684 |     0.702 |           | 
## ----------------|-----------|-----------|-----------|
##            TRUE |        12 |        99 |       111 | 
##                 |     0.316 |     0.298 |           | 
## ----------------|-----------|-----------|-----------|
##    Column Total |        38 |       332 |       370 | 
##                 |     0.103 |     0.897 |           | 
## ----------------|-----------|-----------|-----------|
## 
## 

From the first plot we can see the distributions of our chosen variables. For example, most students’ parents live together and there are more women in our sample than men.

From the second graph we see that the median number of absences is higher among those who consume high amounts of alcohol, and the interquartile range is also shifted higher.

From the first table we see that there are more high consumers among those who have several failures in exams.

From the second table we see that 40% of men and 21% of women have high alcohol consumption.

From the third table we see that 31% of students whose parents live apart and 30% of students whose parents live together are high consumers.
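The sex difference looks substantial while the Pstatus difference does not. A quick formal check with chi-squared tests of independence (a sketch, assuming pormat as loaded above):

# chi-squared tests of independence between high_use and the two factors
chisq.test(table(pormat$high_use, pormat$sex))
chisq.test(table(pormat$high_use, pormat$Pstatus))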

A logistic regression model

m1 <- glm(high_use ~ failures + absences + sex + Pstatus, data = pormat, family = "binomial")

# print out a summary of the model
summary(m1)
## 
## Call:
## glm(formula = high_use ~ failures + absences + sex + Pstatus, 
##     family = "binomial", data = pormat)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.1317  -0.8455  -0.5912   1.0272   2.0345  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -2.02222    0.43765  -4.621 3.83e-06 ***
## failures     0.59686    0.20720   2.881  0.00397 ** 
## absences     0.09301    0.02336   3.982 6.84e-05 ***
## sexM         0.99666    0.24726   4.031 5.56e-05 ***
## PstatusT     0.08764    0.40210   0.218  0.82746    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 452.04  on 369  degrees of freedom
## Residual deviance: 406.95  on 365  degrees of freedom
## AIC: 416.95
## 
## Number of Fisher Scoring iterations: 4
# print out the coefficients of the model
coef(m1)
## (Intercept)    failures    absences        sexM    PstatusT 
## -2.02221996  0.59686177  0.09301028  0.99666359  0.08764120
# compute odds ratios (OR)
OR <- coef(m1) %>% exp

# compute confidence intervals (CI)
CI<-confint(m1)%>%exp
## Waiting for profiling to be done...
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
##                    OR      2.5 %    97.5 %
## (Intercept) 0.1323613 0.05365585 0.3015602
## failures    1.8164095 1.21846564 2.7632268
## absences    1.0974730 1.05068550 1.1517051
## sexM        2.7092276 1.67857047 4.4338543
## PstatusT    1.0915964 0.50797217 2.4883339

From the summary of our model we can see that the failures, absences, and sex variables have statistically significant coefficients (p < 0.001), but the Pstatus variable does not. From the odds ratios we can see that men have 2.71 (95% CI 1.68 to 4.43) times higher odds of high alcohol consumption than women. Each additional failure increases the odds of high alcohol consumption by 81.6% and each additional absence by 9.7%. From the confidence interval of Pstatus we can draw the same conclusion as from the p-value: the CI includes the value one, which indicates that the OR is not statistically significant. To conclude, failures, absences and sex seem to be associated with high alcohol consumption, as hypothesized previously, but parents’ cohabitation status is not.
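To make the interpretation concrete, a small sketch converting the model’s log-odds into predicted probabilities for two illustrative students (the covariate values here are made up for illustration):

# predicted probability of high_use for a hypothetical male and a female
# student with no failures, 5 absences and parents living together
newdata <- data.frame(failures = 0, absences = 5,
                      sex = c("M", "F"), Pstatus = "T")
predict(m1, newdata = newdata, type = "response")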

The predictive power of the model

m2 <- glm(high_use ~ failures + absences + sex , data = pormat, family = "binomial")

# predict() the probability of high_use
probabilities <- predict(m2, type = "response")

# add the predicted probabilities to data
pormat <- mutate(pormat, probability = probabilities)

# use the probabilities to make a prediction of high_use
pormat <- mutate(pormat, prediction = probability>0.5)

# tabulate the target variable versus the predictions
table(high_use = pormat$high_use, prediction = pormat$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   252    7
##    TRUE     78   33
table(high_use = pormat$high_use, prediction = pormat$prediction)%>%prop.table()%>%addmargins
##         prediction
## high_use      FALSE       TRUE        Sum
##    FALSE 0.68108108 0.01891892 0.70000000
##    TRUE  0.21081081 0.08918919 0.30000000
##    Sum   0.89189189 0.10810811 1.00000000
# plot the target variable versus the predictions
g <- ggplot(pormat, aes(x = high_use, y = probability))
g+geom_point()

loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}

# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = pormat$high_use, prob = pormat$probability)
## [1] 0.2297297

As Pstatus was not statistically significant, we leave it out of the final model. Our final model correctly predicted 252 of the 259 FALSE cases, while only 33 of the 111 TRUE cases were predicted correctly. The total proportion of inaccurately classified individuals was 23%.
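From the same confusion matrix we can also compute the accuracy, sensitivity and specificity (a quick sketch, assuming the predictions computed above):

# accuracy, sensitivity (true positive rate) and specificity
tab <- table(high_use = pormat$high_use, prediction = pormat$prediction)
c(accuracy    = sum(diag(tab)) / sum(tab),                   # (252 + 33) / 370
  sensitivity = tab["TRUE", "TRUE"] / sum(tab["TRUE", ]),    # 33 / 111
  specificity = tab["FALSE", "FALSE"] / sum(tab["FALSE", ])) # 252 / 259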

10-Fold cross validation

library(boot)
cv <- cv.glm(data = pormat, cost = loss_func, glmfit = m2, K = 10)

# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2567568

According to the 10-fold cross-validation, our model has a prediction error of about 0.26. That is a bit smaller than in the DataCamp model, although our final model is exactly the same. However, our data has fewer observations, so the comparison is not so straightforward.
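Because the folds are assigned randomly, a single run of cv.glm gives a noisy estimate; averaging over several repetitions gives a steadier number (a sketch, assuming m2 and loss_func from above):

# repeat 10-fold cross-validation and average the estimated error
set.seed(123)
errors <- replicate(20, cv.glm(data = pormat, cost = loss_func,
                               glmfit = m2, K = 10)$delta[1])
mean(errors)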


Exercise 4: Clustering and classification

date()
## [1] "Thu Dec 03 21:52:59 2020"
library(tidyr); library(dplyr); library(ggplot2); library(gmodels)

Data description

library(MASS)
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
# load the data
data("Boston")

# explore the dataset
dim(Boston)
## [1] 506  14
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...

The Boston data set has 506 observations and 14 variables. The data include information on housing values in the suburbs of Boston. A full list of the variables and their descriptions is available here

Overview of the data

pairs(Boston)

pairs(Boston[c("crim","age","dis","rad","tax","black","medv")])

summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

From the graphs we can see the distributions of the variables and their relationships with the other variables. When we take a closer look at some selected variables, we see, for example, that crime rates are higher in areas with larger proportions of older houses or in areas where the mean distance to employment centres is shorter.
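These impressions can be checked against the correlations directly (a quick check, assuming the Boston data as loaded above):

# correlations behind the pairs-plot impressions
cor(Boston$crim, Boston$age)  # positive: more old houses, higher crime rate
cor(Boston$crim, Boston$dis)  # negative: shorter distances, higher crime rate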

Standardization of the data, creating test and train data sets

# center and standardize variables
boston_scaled <- scale(Boston)

# summaries of the scaled variables
summary(boston_scaled)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865
# change the object to data frame
boston_scaled<-as.data.frame(boston_scaled)

# create a quantile vector of crim 
bins <- quantile(boston_scaled$crim)

# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, label=c("low","med_low","med_high","high"))

# look at the table of the new factor crime
table(crime)
## crime
##      low  med_low med_high     high 
##      127      126      126      127
# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)

# remove the crime variable from the data set
boston_scaled <- dplyr::select(boston_scaled, -crim)

# number of rows in the Boston dataset 
n <- nrow(boston_scaled)


# choose randomly 80% of the rows
ind <- sample(n,  size = n * 0.8)

# create train set
train <- boston_scaled[ind,]

# create test set 
test <- boston_scaled[-ind,]

After standardization, we can see from the summary table that all variables now have mean zero, and the minimum and maximum values of the scaled variables vary within much smaller intervals than in the original data.

After creating a categorical variable for the crime rate, we can see from the table that the observations are distributed quite evenly across the four categories.
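As a quick sketch of what scale() and the quantile binning do under the hood (assuming the Boston data as loaded above):

# scale() subtracts the column mean and divides by the column standard deviation
crim_std <- (Boston$crim - mean(Boston$crim)) / sd(Boston$crim)
all.equal(crim_std, as.vector(scale(Boston$crim)))  # TRUE

# quantile() returns the 0%, 25%, 50%, 75% and 100% points used as the breaks,
# which is why the four crime categories have roughly equal counts
quantile(Boston$crim)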

Linear discriminant analysis

# linear discriminant analysis
lda.fit <- lda(crime~., data = train)

# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2574257 0.2475248 0.2549505 0.2400990 
## 
## Group means:
##                  zn      indus        chas        nox         rm        age
## low       1.0612841 -0.9450554 -0.08304540 -0.9075789  0.4784587 -0.9236573
## med_low  -0.1169961 -0.2545465 -0.07547406 -0.5573641 -0.1467291 -0.3443799
## med_high -0.3690158  0.1669531  0.22458650  0.3982614  0.2008117  0.3992592
## high     -0.4872402  1.0172187 -0.02879709  1.0426547 -0.3490563  0.7932915
##                 dis        rad        tax     ptratio       black       lstat
## low       0.9044357 -0.7069017 -0.7184941 -0.48622424  0.37778483 -0.79288010
## med_low   0.3451873 -0.5443053 -0.4327661 -0.04043224  0.34494786 -0.13552092
## med_high -0.3649683 -0.3875674 -0.3039535 -0.34539742  0.05511372 -0.08188051
## high     -0.8645728  1.6371072  1.5133254  0.77958792 -0.72887548  0.88289000
##                 medv
## low       0.56433749
## med_low  -0.02292095
## med_high  0.24830626
## high     -0.65594629
## 
## Coefficients of linear discriminants:
##                 LD1          LD2         LD3
## zn       0.08816204  0.736492898 -0.87399691
## indus    0.03281042 -0.377914265  0.30141008
## chas    -0.08245817 -0.020798175  0.02243523
## nox      0.39552743 -0.757236690 -1.27073538
## rm      -0.08377003 -0.100208160 -0.15988838
## age      0.26652787 -0.286603080 -0.17578703
## dis     -0.04514787 -0.366842534  0.14887346
## rad      3.06321653  0.749678382 -0.14893083
## tax     -0.10934674  0.261975859  0.57412954
## ptratio  0.10690124 -0.003076552 -0.23264671
## black   -0.14032838  0.030723323  0.19847055
## lstat    0.25730873 -0.184084756  0.45779560
## medv     0.20499376 -0.392966717 -0.13135713
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9387 0.0446 0.0167
# target classes as numeric
classes <- as.numeric(train$crime)

#plot LDA biplot
plot(lda.fit, dimen = 2, col = classes, pch = classes)

From the output of our LDA model we see that the first linear discriminant explains 93.9% of the between-group variance. This can also be seen from the plot, where the separation of the groups along the x axis (LD1) is somewhat clearer, especially for the high category, than along the y axis (LD2).
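The proportion of trace can be recomputed from the singular values stored in the fitted object (a quick check, assuming lda.fit from above):

# proportion of between-group variance explained by each linear discriminant
lda.fit$svd^2 / sum(lda.fit$svd^2)  # ~0.939 0.045 0.017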

Predictions

# save the correct classes from test data
correct_classes <- test$crime
# remove the crime variable from test data
test <- dplyr::select(test, -crime)

# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)

# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       12       9        2    0
##   med_low    5      17        4    0
##   med_high   0       8       15    0
##   high       0       0        0   30

From the table we see that our LDA model did a fairly good job, as the majority of predictions were correct in most categories; the high category was predicted perfectly, while the low category was the hardest to predict correctly (12 of 23 right).
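The overall share of correct predictions can be read off the diagonal of the cross-tabulation (a quick sketch, assuming the objects from above):

# overall classification accuracy on the test set
tab <- table(correct = correct_classes, predicted = lda.pred$class)
sum(diag(tab)) / sum(tab)  # (12 + 17 + 15 + 30) / 102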

K-means clustering

set.seed(123)
library(MASS); library(ggplot2)
data("Boston")

# center and standardize variables
boston_scaled <- scale(Boston)
boston_scaled<-as.data.frame(boston_scaled)

# euclidean distance matrix
dist_eu <- dist(boston_scaled)

# look at the summary of the distances
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970
# k-means clustering: 
km <-kmeans(boston_scaled, centers = 2)

pairs(boston_scaled, col = km$cluster)

pairs(boston_scaled[6:10], col = km$cluster)

# Investigate the optimal number of clusters with the total of within cluster sum of squares (tWCSS)

# determine the number of clusters
k_max <- 10

# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})

# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')

# k-means clustering: 3 clusters
km <-kmeans(boston_scaled, centers = 3)

pairs(boston_scaled, col = km$cluster)

pairs(boston_scaled[6:10], col = km$cluster)

When we first run K-means clustering with two clusters, we can see that, for example, the rad and tax variables seem to have a clear effect on the clustering results. When we investigate the optimal number of clusters, three clusters seem to be enough according to the total within-cluster sum of squares. With three clusters, again rad and tax seem to affect the clustering results.
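One way to see which variables drive the clustering is to compare the cluster centers on the scaled variables (a quick sketch, assuming the three-cluster km object from above):

# cluster centers; variables whose centers differ most between clusters
# (e.g. rad and tax) drive the clustering
round(km$centers[, c("rad", "tax", "crim", "nox")], 2)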

Bonus

library(MASS)
data("Boston")
set.seed(123)

# center and standardize variables
boston_scaled <- scale(Boston)
boston_scaled<-as.data.frame(boston_scaled)

km <-kmeans(boston_scaled, centers = 3)


boston_scaled1<-data.frame(boston_scaled, km$cluster)

# linear discriminant analysis
lda.fit2 <- lda(km.cluster~., data = boston_scaled1)

# print the lda.fit object
lda.fit2
## Call:
## lda(km.cluster ~ ., data = boston_scaled1)
## 
## Prior probabilities of groups:
##         1         2         3 
## 0.2806324 0.3992095 0.3201581 
## 
## Group means:
##         crim         zn        indus        chas         nox         rm
## 1  0.9693718 -0.4872402  1.074440092 -0.02279455  1.04197430 -0.4146077
## 2 -0.3549295 -0.4039269  0.009294842  0.11748284  0.01531993 -0.2547135
## 3 -0.4071299  0.9307491 -0.953383032 -0.12651054 -0.93243813  0.6810272
##          age        dis        rad        tax     ptratio      black
## 1  0.7666895 -0.8346743  1.5010821  1.4852884  0.73584205 -0.7605477
## 2  0.3096462 -0.2267757 -0.5759279 -0.4964651 -0.09219308  0.2473725
## 3 -1.0581385  1.0143978 -0.5976310 -0.6828704 -0.53004055  0.3582008
##         lstat       medv
## 1  0.85963373 -0.6874933
## 2  0.09168925 -0.1052456
## 3 -0.86783467  0.7338497
## 
## Coefficients of linear discriminants:
##                 LD1         LD2
## crim     0.03654114  0.20373943
## zn      -0.08346821  0.34784463
## indus   -0.32262409 -0.12105014
## chas    -0.04761479 -0.13327215
## nox     -0.13026254  0.15610984
## rm       0.13267423  0.44058946
## age     -0.11936644 -0.84880847
## dis      0.23454618  0.58819732
## rad     -1.96894437  0.57933028
## tax     -1.10861600  0.53984421
## ptratio -0.13087741 -0.02004405
## black    0.15432491 -0.06106305
## lstat   -0.14002173  0.14786473
## medv     0.02559139  0.37307811
## 
## Proportion of trace:
##    LD1    LD2 
## 0.8999 0.1001
# target classes as numeric
classes <- as.numeric(boston_scaled1$km.cluster)

# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

#plot LDA biplot
plot(lda.fit2, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit2, myscale = 3)

From the plot we can see that the rad and tax variables are most strongly correlated with the first linear discriminant, and dis and age with the second linear discriminant.

Super-bonus

set.seed(123)
model_predictors <- dplyr::select(train, -crime)

# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda.fit$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)

#Next, install and access the plotly package. Create a 3D plot (Cool!) of the columns of the matrix product by typing the code below.

library(plotly)
## 
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
## 
##     select
## The following object is masked from 'package:ggplot2':
## 
##     last_plot
## The following object is masked from 'package:stats':
## 
##     filter
## The following object is masked from 'package:graphics':
## 
##     layout
classes <- as.numeric(train$crime)

plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color=classes)
## Warning: `arrange_()` is deprecated as of dplyr 0.7.0.
## Please use `arrange()` instead.
## See vignette('programming') for more help
## This warning is displayed once every 8 hours.
## Call `lifecycle::last_warnings()` to see where this warning was generated.

I didn’t figure out how to add the clusters as colors.


Exercise 5: Dimensionality reduction techniques

date()
## [1] "Thu Dec 03 21:53:12 2020"
library(tidyr); library(dplyr); library(ggplot2); library(gmodels); library(GGally); library(corrplot); library(knitr)
## corrplot 0.84 loaded

Data description

#Load the data
human<-read.csv("data/human.csv", header=T, sep=",", row.names=1)

# explore the dataset
dim(human)
## [1] 155   8
str(human)
## 'data.frame':    155 obs. of  8 variables:
##  $ Edu2.FM  : num  1.007 0.997 0.983 0.989 0.969 ...
##  $ Labo.FM  : num  0.891 0.819 0.825 0.884 0.829 ...
##  $ Edu.Exp  : num  17.5 20.2 15.8 18.7 17.9 16.5 18.6 16.5 15.9 19.2 ...
##  $ Life.Exp : num  81.6 82.4 83 80.2 81.6 80.9 80.9 79.1 82 81.8 ...
##  $ GNI      : int  64992 42261 56431 44025 45435 43919 39568 52947 42155 32689 ...
##  $ Mat.Mor  : int  4 6 6 5 6 7 9 28 11 8 ...
##  $ Ado.Birth: num  7.8 12.1 1.9 5.1 6.2 3.8 8.2 31 14.5 25.3 ...
##  $ Parli.F  : num  39.6 30.5 28.5 38 36.9 36.9 19.9 19.4 28.2 31.4 ...
ggpairs(human)

cor(human)%>%corrplot()

var(human$GNI)
## [1] 343874462
var(human$Mat.Mor)
## [1] 44854.83

Our data contains 155 observations that represent different countries and the following 8 variables:

GNI = Gross National Income per capita

Life.Exp = Life expectancy at birth

Edu.Exp = Expected years of schooling

Mat.Mor = Maternal mortality ratio

Ado.Birth = Adolescent birth rate

Parli.F = Percentage of female representatives in parliament

Edu2.FM = Proportion of females with at least secondary education / Proportion of males with at least secondary education

Labo.FM = Proportion of females in the labour force / Proportion of males in the labour force

Our data set is originally derived from the United Nations Development Programme’s Human Development Index data; more information is available here.

From the figures above we can see the distributions of the variables and their associations with each other. For example, the maternal mortality ratio is positively correlated with the adolescent birth rate and negatively correlated with gross national income per capita.

Principal component analysis (PCA)

Unstandardized data

pca_human <- prcomp(human)
pca_human
## Standard deviations (1, .., p=8):
## [1] 1.854416e+04 1.855219e+02 2.518701e+01 1.145441e+01 3.766241e+00
## [6] 1.565912e+00 1.912052e-01 1.591112e-01
## 
## Rotation (n x k) = (8 x 8):
##                     PC1           PC2           PC3           PC4           PC5
## Edu2.FM   -5.607472e-06  0.0006713951 -3.412027e-05 -2.736326e-04 -0.0022935252
## Labo.FM    2.331945e-07 -0.0002819357  5.302884e-04 -4.692578e-03  0.0022190154
## Edu.Exp   -9.562910e-05  0.0075529759  1.427664e-02 -3.313505e-02  0.1431180282
## Life.Exp  -2.815823e-04  0.0283150248  1.294971e-02 -6.752684e-02  0.9865644425
## GNI       -9.999832e-01 -0.0057723054 -5.156742e-04  4.932889e-05 -0.0001135863
## Mat.Mor    5.655734e-03 -0.9916320120  1.260302e-01 -6.100534e-03  0.0266373214
## Ado.Birth  1.233961e-03 -0.1255502723 -9.918113e-01  5.301595e-03  0.0188618600
## Parli.F   -5.526460e-05  0.0032317269 -7.398331e-03 -9.971232e-01 -0.0716401914
##                     PC6           PC7           PC8
## Edu2.FM    2.180183e-02  6.998623e-01  7.139410e-01
## Labo.FM    3.264423e-02  7.132267e-01 -7.001533e-01
## Edu.Exp    9.882477e-01 -3.826887e-02  7.776451e-03
## Life.Exp  -1.453515e-01  5.380452e-03  2.281723e-03
## GNI       -2.711698e-05 -8.075191e-07 -1.176762e-06
## Mat.Mor    1.695203e-03  1.355518e-04  8.371934e-04
## Ado.Birth  1.273198e-02 -8.641234e-05 -1.707885e-04
## Parli.F   -2.309896e-02 -2.642548e-03  2.680113e-03
s <- summary(pca_human)
s
## Importance of components:
##                              PC1      PC2   PC3   PC4   PC5   PC6    PC7    PC8
## Standard deviation     1.854e+04 185.5219 25.19 11.45 3.766 1.566 0.1912 0.1591
## Proportion of Variance 9.999e-01   0.0001  0.00  0.00 0.000 0.000 0.0000 0.0000
## Cumulative Proportion  9.999e-01   1.0000  1.00  1.00 1.000 1.000 1.0000 1.0000
#Draw a biplot of the principal component representation and the original variables
biplot(pca_human, choices = 1:2, cex = c(0.5, 0.6), col = c("grey80", "deeppink2") )
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped
Figure 1: PCA, unscaled data: How countries differ from each other according to HDI indicators

Standardized data

human_std <- scale(human)

pca_human <- prcomp(human_std)
pca_human
## Standard deviations (1, .., p=8):
## [1] 2.0708380 1.1397204 0.8750485 0.7788630 0.6619563 0.5363061 0.4589994
## [8] 0.3222406
## 
## Rotation (n x k) = (8 x 8):
##                   PC1         PC2         PC3         PC4        PC5
## Edu2.FM   -0.35664370  0.03796058 -0.24223089  0.62678110 -0.5983585
## Labo.FM    0.05457785  0.72432726 -0.58428770  0.06199424  0.2625067
## Edu.Exp   -0.42766720  0.13940571 -0.07340270 -0.07020294  0.1659678
## Life.Exp  -0.44372240 -0.02530473  0.10991305 -0.05834819  0.1628935
## GNI       -0.35048295  0.05060876 -0.20168779 -0.72727675 -0.4950306
## Mat.Mor    0.43697098  0.14508727 -0.12522539 -0.25170614 -0.1800657
## Ado.Birth  0.41126010  0.07708468  0.01968243  0.04986763 -0.4672068
## Parli.F   -0.08438558  0.65136866  0.72506309  0.01396293 -0.1523699
##                   PC6         PC7         PC8
## Edu2.FM    0.17713316  0.05773644  0.16459453
## Labo.FM   -0.03500707 -0.22729927 -0.07304568
## Edu.Exp   -0.38606919  0.77962966 -0.05415984
## Life.Exp  -0.42242796 -0.43406432  0.62737008
## GNI        0.11120305 -0.13711838 -0.16961173
## Mat.Mor    0.17370039  0.35380306  0.72193946
## Ado.Birth -0.76056557 -0.06897064 -0.14335186
## Parli.F    0.13749772  0.00568387 -0.02306476
s <- summary(pca_human)
s
## Importance of components:
##                           PC1    PC2     PC3     PC4     PC5     PC6     PC7
## Standard deviation     2.0708 1.1397 0.87505 0.77886 0.66196 0.53631 0.45900
## Proportion of Variance 0.5361 0.1624 0.09571 0.07583 0.05477 0.03595 0.02634
## Cumulative Proportion  0.5361 0.6984 0.79413 0.86996 0.92473 0.96069 0.98702
##                            PC8
## Standard deviation     0.32224
## Proportion of Variance 0.01298
## Cumulative Proportion  1.00000
#Draw a biplot of the principal component representation and the original variables
biplot(pca_human, choices = 1:2, cex = c(0.5, 0.6), col = c("grey80", "deeppink2"))
Figure 2: PCA, scaled data: How countries differ from each other according to HDI indicators

From the figures above we can see that the results of PCA on unstandardized and standardized data are quite different. When we have not scaled our data, PC1 explains practically all of the variance and the GNI variable is the only one that stands out. With standardized data, PC1 explains 54% of the variance and PC2 16%. There are also now more variables that explain the differences between countries, whereas in the first model essentially only GNI differentiated countries from each other.

The difference in results is due to the fact that PCA tries to find the components that maximize the variance, and therefore gives a lot of weight to variables with a large variance. In our unscaled data there are huge differences in variance, due to the scales on which the variables are measured. From the descriptive part we can see that GNI has a markedly larger scale and takes much larger values than the other variables, which is why it dominates the PCA and biases our results. With scaling we can harmonize the differences in measurement scales and variances.
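The variance imbalance is easy to see by computing all the column variances at once (a quick check, assuming human as loaded above):

# variances of the unscaled variables; GNI dwarfs everything else
sort(sapply(human, var), decreasing = TRUE)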

From Figure 2 we can see the variables (or features) that most affect PC1: countries with a high maternal mortality ratio and adolescent birth rate lie more to the right along the PC1 axis, whereas countries with a high GNI or life expectancy lie to the left. We can also say that the maternal mortality ratio and the adolescent birth rate are highly positively correlated, as the angle between them on the biplot is small; on the other hand, GNI and the maternal mortality ratio are negatively correlated, as the angle between them is almost 180 degrees. From the plot we also see that the maternal mortality ratio and the percentage of female representatives in parliament are not strongly correlated, as the angle between them is approx. 90 degrees. Moreover, we see that the percentage of female representatives in parliament and Labo.FM are the features that most affect PC2.

Taking a closer look at how the observations are aligned along the axes, not so surprisingly we find the Nordic countries at the top left, where we would expect to find countries with high income, high life expectancy, and where equality between men and women is an important value.
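The angle-based reading of the biplot can be checked against the correlation matrix (a quick check, assuming human as loaded above):

# correlations behind the biplot angles: a small angle means a strong positive
# correlation, ~180 degrees a strong negative one, ~90 degrees roughly none
cor(human$Mat.Mor, human$Ado.Birth)  # strongly positive
cor(human$Mat.Mor, human$GNI)        # negative
cor(human$Mat.Mor, human$Parli.F)    # close to zero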

Multiple Correspondence Analysis

library(FactoMineR)
data(tea)

# explore the dataset
dim(tea)
## [1] 300  36
str(tea)
## 'data.frame':    300 obs. of  36 variables:
##  $ breakfast       : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
##  $ tea.time        : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
##  $ evening         : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
##  $ lunch           : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
##  $ dinner          : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
##  $ always          : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
##  $ home            : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
##  $ work            : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
##  $ tearoom         : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
##  $ friends         : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
##  $ resto           : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
##  $ pub             : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
##  $ Tea             : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How             : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ sugar           : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ how             : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ where           : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ price           : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
##  $ age             : int  39 45 47 23 48 21 37 36 40 37 ...
##  $ sex             : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
##  $ SPC             : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
##  $ Sport           : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
##  $ age_Q           : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
##  $ frequency       : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
##  $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
##  $ spirituality    : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
##  $ healthy         : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
##  $ diuretic        : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
##  $ friendliness    : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
##  $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ feminine        : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
##  $ sophisticated   : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
##  $ slimming        : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ exciting        : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
##  $ relaxing        : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
##  $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
# column names to keep in the dataset
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")

# select the 'keep_columns' to create a new dataset
tea_time <- select(tea, one_of(keep_columns))

gather(tea_time) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))
## Warning: attributes are not identical across measure variables;
## they will be dropped

# look at the summaries and structure of the data
summary(tea_time)
##         Tea         How                      how           sugar    
##  black    : 74   alone:195   tea bag           :170   No.sugar:155  
##  Earl Grey:193   lemon: 33   tea bag+unpackaged: 94   sugar   :145  
##  green    : 33   milk : 63   unpackaged        : 36                 
##                  other:  9                                          
##                   where           lunch    
##  chain store         :192   lunch    : 44  
##  chain store+tea shop: 78   Not.lunch:256  
##  tea shop            : 30                  
## 
str(tea_time)
## 'data.frame':    300 obs. of  6 variables:
##  $ Tea  : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How  : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ how  : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ sugar: Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ where: Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ lunch: Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
mca <- MCA(tea_time, graph = FALSE) 
# summary of the model 
summary(mca) 
## 
## Call:
## MCA(X = tea_time, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6   Dim.7
## Variance               0.279   0.261   0.219   0.189   0.177   0.156   0.144
## % of var.             15.238  14.232  11.964  10.333   9.667   8.519   7.841
## Cumulative % of var.  15.238  29.471  41.435  51.768  61.434  69.953  77.794
##                        Dim.8   Dim.9  Dim.10  Dim.11
## Variance               0.141   0.117   0.087   0.062
## % of var.              7.705   6.392   4.724   3.385
## Cumulative % of var.  85.500  91.891  96.615 100.000
## 
## Individuals (the 10 first)
##                       Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3
## 1                  | -0.298  0.106  0.086 | -0.328  0.137  0.105 | -0.327
## 2                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 3                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 4                  | -0.530  0.335  0.460 | -0.318  0.129  0.166 |  0.211
## 5                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 6                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 7                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 8                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 9                  |  0.143  0.024  0.012 |  0.871  0.969  0.435 | -0.067
## 10                 |  0.476  0.271  0.140 |  0.687  0.604  0.291 | -0.650
##                       ctr   cos2  
## 1                   0.163  0.104 |
## 2                   0.735  0.314 |
## 3                   0.062  0.069 |
## 4                   0.068  0.073 |
## 5                   0.062  0.069 |
## 6                   0.062  0.069 |
## 7                   0.062  0.069 |
## 8                   0.735  0.314 |
## 9                   0.007  0.003 |
## 10                  0.643  0.261 |
## 
## Categories (the 10 first)
##                        Dim.1     ctr    cos2  v.test     Dim.2     ctr    cos2
## black              |   0.473   3.288   0.073   4.677 |   0.094   0.139   0.003
## Earl Grey          |  -0.264   2.680   0.126  -6.137 |   0.123   0.626   0.027
## green              |   0.486   1.547   0.029   2.952 |  -0.933   6.111   0.107
## alone              |  -0.018   0.012   0.001  -0.418 |  -0.262   2.841   0.127
## lemon              |   0.669   2.938   0.055   4.068 |   0.531   1.979   0.035
## milk               |  -0.337   1.420   0.030  -3.002 |   0.272   0.990   0.020
## other              |   0.288   0.148   0.003   0.876 |   1.820   6.347   0.102
## tea bag            |  -0.608  12.499   0.483 -12.023 |  -0.351   4.459   0.161
## tea bag+unpackaged |   0.350   2.289   0.056   4.088 |   1.024  20.968   0.478
## unpackaged         |   1.958  27.432   0.523  12.499 |  -1.015   7.898   0.141
##                     v.test     Dim.3     ctr    cos2  v.test  
## black                0.929 |  -1.081  21.888   0.382 -10.692 |
## Earl Grey            2.867 |   0.433   9.160   0.338  10.053 |
## green               -5.669 |  -0.108   0.098   0.001  -0.659 |
## alone               -6.164 |  -0.113   0.627   0.024  -2.655 |
## lemon                3.226 |   1.329  14.771   0.218   8.081 |
## milk                 2.422 |   0.013   0.003   0.000   0.116 |
## other                5.534 |  -2.524  14.526   0.197  -7.676 |
## tea bag             -6.941 |  -0.065   0.183   0.006  -1.287 |
## tea bag+unpackaged  11.956 |   0.019   0.009   0.000   0.226 |
## unpackaged          -6.482 |   0.257   0.602   0.009   1.640 |
## 
## Categorical variables (eta2)
##                      Dim.1 Dim.2 Dim.3  
## Tea                | 0.126 0.108 0.410 |
## How                | 0.076 0.190 0.394 |
## how                | 0.708 0.522 0.010 |
## sugar              | 0.065 0.001 0.336 |
## where              | 0.702 0.681 0.055 |
## lunch              | 0.000 0.064 0.111 |
# visualize MCA 

plot(mca, invisible=c("ind"), habillage = "quali")

We choose 6 variables from the tea data set and take a look at their distributions. We then run MCA, and from the output we can see that dimension 1 explains 15% of the variance and dimension 2 explains 14%. The how variable (i.e., whether the tea is drunk from a tea bag) is the one most strongly correlated with dimension 1. From the MCA biplot we can see how the variables and their categories are associated with each other. For example, unpackaged (a category of how) and tea shop (a category of where) are more similar than unpackaged and chain store. Roughly, unpackaged and tea shop lie close to each other, as do tea bag and chain store. This probably reflects the fact that people who like tea a lot buy their tea from a tea shop and drink it unpackaged, while more casual tea drinkers buy theirs from a chain store and drink it from a tea bag.
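The variance percentages quoted above come from the eigenvalues of the MCA solution, which can be extracted directly (assuming the mca object from above):

# eigenvalues and percentages of variance for the MCA dimensions
head(mca$eig)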


Exercise 6: Analysis of longitudinal data

date()
## [1] "Thu Dec 03 21:53:19 2020"
library(tidyr); library(dplyr); library(ggplot2); library(gmodels); library(GGally); library(corrplot); library(knitr)

Chapter 8 analyses on RATS data

rats <- read.csv("data/RATSL.csv", header=T)

str(rats)
## 'data.frame':    176 obs. of  5 variables:
##  $ ID    : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ Group : int  1 1 1 1 1 1 1 1 2 2 ...
##  $ WD    : chr  "WD1" "WD1" "WD1" "WD1" ...
##  $ Weight: int  240 225 245 260 255 260 275 245 410 405 ...
##  $ Time  : int  1 1 1 1 1 1 1 1 1 1 ...
rats$ID <- factor(rats$ID)
rats$Group <- factor(rats$Group)

Individual response profiles by diet group

ggplot(rats, aes(x = Time, y = Weight, linetype = ID)) +
  geom_line() +
  scale_linetype_manual(values = rep(1:10, times=4)) +
  facet_grid(. ~ Group, labeller = label_both) +
  theme(legend.position = "none") + 
  scale_y_continuous(limits = c(min(rats$Weight), max(rats$Weight)))

rats <- rats %>%  group_by(Time) %>% mutate(stdweight = ((Weight-mean(Weight))/sd(Weight))) %>% ungroup()

ggplot(rats, aes(x = Time, y = stdweight, linetype = ID)) +
  geom_line() +
  scale_linetype_manual(values = rep(1:10, times=4)) +
  facet_grid(. ~ Group, labeller = label_both) +
  scale_y_continuous(name = "Standardized Weight")

From the two graphs above we can see how the weights of the rats change over the study period in the different diet groups. From the beginning, Group 1 clearly differs from the other groups, as the rats in this group have a much lower weight. From both graphs we can also see the tracking phenomenon: individuals with higher values at the beginning also tend to have higher values at the end.
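The tracking phenomenon can be quantified by correlating each rat’s weight at the first time point with its weight at the last one (a sketch using dplyr, assuming rats as read above):

# correlation between each rat's first and last weight; a value near 1
# indicates strong tracking
first_last <- rats %>%
  group_by(ID) %>%
  summarise(first = Weight[which.min(Time)],
            last  = Weight[which.max(Time)])
cor(first_last$first, first_last$last)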

Mean response profiles for the three diet groups

str(rats)
## tibble [176 x 6] (S3: tbl_df/tbl/data.frame)
##  $ ID       : Factor w/ 16 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
##  $ Group    : Factor w/ 3 levels "1","2","3": 1 1 1 1 1 1 1 1 2 2 ...
##  $ WD       : chr [1:176] "WD1" "WD1" "WD1" "WD1" ...
##  $ Weight   : int [1:176] 240 225 245 260 255 260 275 245 410 405 ...
##  $ Time     : int [1:176] 1 1 1 1 1 1 1 1 1 1 ...
##  $ stdweight: num [1:176] -1.001 -1.12 -0.961 -0.842 -0.882 ...
# Number of measurement occasions
n <- rats$Time %>% unique() %>% length()
n
## [1] 11
# Summary data with mean and standard error of weight by group and time 
ratsS <- rats %>% group_by(Group, Time) %>% summarise( mean = mean(Weight), se = (sd(Weight)/sqrt(unique(ID) %>% length()))) %>% ungroup()
## `summarise()` regrouping output by 'Group' (override with `.groups` argument)
glimpse(ratsS)
## Rows: 33
## Columns: 4
## $ Group <fct> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2...
## $ Time  <int> 1, 8, 15, 22, 29, 36, 43, 44, 50, 57, 64, 1, 8, 15, 22, 29, 3...
## $ mean  <dbl> 250.625, 255.000, 254.375, 261.875, 264.625, 265.000, 267.375...
## $ se    <dbl> 5.381640, 4.629100, 4.057346, 4.808614, 3.909409, 4.166190, 3...
data.frame(ratsS)
##    Group Time    mean        se
## 1      1    1 250.625  5.381640
## 2      1    8 255.000  4.629100
## 3      1   15 254.375  4.057346
## 4      1   22 261.875  4.808614
## 5      1   29 264.625  3.909409
## 6      1   36 265.000  4.166190
## 7      1   43 267.375  3.872695
## 8      1   44 267.250  3.604313
## 9      1   50 269.500  5.147815
## 10     1   57 271.500  3.803194
## 11     1   64 273.750  4.398661
## 12     2    1 453.750 34.903140
## 13     2    8 460.000 33.973029
## 14     2   15 467.500 32.945662
## 15     2   22 475.000 35.341194
## 16     2   29 482.750 35.919760
## 17     2   36 488.750 36.259194
## 18     2   43 486.500 36.314598
## 19     2   44 488.750 35.626243
## 20     2   50 501.250 37.128998
## 21     2   57 509.000 36.555893
## 22     2   64 518.500 36.854443
## 23     3    1 508.750 13.900689
## 24     3    8 506.250 14.197271
## 25     3   15 513.750 13.129959
## 26     3   22 518.250 12.270391
## 27     3   29 523.750 12.539105
## 28     3   36 529.250 12.202288
## 29     3   43 522.750 10.298665
## 30     3   44 530.000  9.201449
## 31     3   50 538.250 10.625245
## 32     3   57 542.500  8.509798
## 33     3   64 550.250  9.446119
# Plot the mean profiles
ggplot(ratsS, aes(x = Time, y = mean, linetype = Group, shape = Group)) +
  geom_line() +
  scale_linetype_manual(values = c(1,2,3)) +
  geom_point(size=3) +
  scale_shape_manual(values = c(1,2,3)) +
  geom_errorbar(aes(ymin = mean - se, ymax = mean + se), linetype = 1, width = 0.3) +
  theme(legend.position = c(0.9,0.5)) +
  scale_y_continuous(name = "mean(Weight) +/- se(Weight)")

# Treat Time as categorical for the boxplots
rats$Time <- factor(rats$Time)

str(rats)
## tibble [176 x 6] (S3: tbl_df/tbl/data.frame)
##  $ ID       : Factor w/ 16 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
##  $ Group    : Factor w/ 3 levels "1","2","3": 1 1 1 1 1 1 1 1 2 2 ...
##  $ WD       : chr [1:176] "WD1" "WD1" "WD1" "WD1" ...
##  $ Weight   : int [1:176] 240 225 245 260 255 260 275 245 410 405 ...
##  $ Time     : Factor w/ 11 levels "1","8","15","22",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ stdweight: num [1:176] -1.001 -1.12 -0.961 -0.842 -0.882 ...
ggplot(rats, aes(x = Time, y = Weight, fill = Group)) +
  geom_boxplot()

# Convert Time back to its numeric values; as.integer() on a factor would
# return the level codes (1-11), so we go via as.character()
rats$Time <- as.integer(as.character(rats$Time))

str(rats)
## tibble [176 x 6] (S3: tbl_df/tbl/data.frame)
##  $ ID       : Factor w/ 16 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
##  $ Group    : Factor w/ 3 levels "1","2","3": 1 1 1 1 1 1 1 1 2 2 ...
##  $ WD       : chr [1:176] "WD1" "WD1" "WD1" "WD1" ...
##  $ Weight   : int [1:176] 240 225 245 260 255 260 275 245 410 405 ...
##  $ Time     : int [1:176] 1 1 1 1 1 1 1 1 1 1 ...
##  $ stdweight: num [1:176] -1.001 -1.12 -0.961 -0.842 -0.882 ...

After calculating the mean and standard error for each group at every time point, the plot shows that groups 2 and 3 lie close to each other and their standard error bars overlap, so the difference in diet between these two groups may not affect how weight changes. Group 1 again differs from the other two groups with markedly lower values. Group 1 also has smaller standard errors, as it contains more observations (eight rats versus four).
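Where the error bars overlap it can be hard to tell the profiles apart; one cosmetic option (a sketch, not part of the original analysis) is to dodge the three groups slightly along the time axis:

# Shift the three group profiles slightly sideways so the error bars separate
pd <- position_dodge(width = 1.5)

ggplot(ratsS, aes(x = Time, y = mean, linetype = Group, shape = Group)) +
  geom_line(position = pd) +
  geom_point(size = 3, position = pd) +
  geom_errorbar(aes(ymin = mean - se, ymax = mean + se), width = 0.3, position = pd) +
  scale_y_continuous(name = "mean(Weight) +/- se(Weight)")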

Boxplots of mean summary measures for the three groups, with and without outliers

# Create summary data by group and subject, with the mean weight as the summary variable (ignoring the baseline, day 1).
ratS1 <- rats  %>%
  filter(Time>1) %>%
  group_by(Group, ID) %>%
  summarise( mean=mean(Weight) ) %>%
  ungroup()
## `summarise()` regrouping output by 'Group' (override with `.groups` argument)
# Glimpse the data
glimpse(ratS1)
## Rows: 16
## Columns: 3
## $ Group <fct> 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3
## $ ID    <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
## $ mean  <dbl> 263.2, 238.9, 261.7, 267.2, 270.9, 276.2, 274.6, 267.5, 443.9...
ratS1
## # A tibble: 16 x 3
##    Group ID     mean
##    <fct> <fct> <dbl>
##  1 1     1      263.
##  2 1     2      239.
##  3 1     3      262.
##  4 1     4      267.
##  5 1     5      271.
##  6 1     6      276.
##  7 1     7      275.
##  8 1     8      268.
##  9 2     9      444.
## 10 2     10     458.
## 11 2     11     456.
## 12 2     12     594 
## 13 3     13     495.
## 14 3     14     536.
## 15 3     15     542.
## 16 3     16     536.
# Draw a boxplot of the mean versus group
ggplot(ratS1, aes(x = Group, y = mean)) +
  geom_boxplot() +
  stat_summary(fun = "mean", geom = "point", shape=23, size=4, fill = "white") +
  scale_y_continuous(name = "mean(Weight)")+labs(title="With outliers")+theme(
  plot.title = element_text(hjust = 0.5))

# Create a new dataset by filtering out the outliers, then draw the plot again with the new data
ratS2 <- filter(ratS1, mean>250 & mean<550)
ratS2 <- filter(ratS2, mean!=495.2)


ggplot(ratS2, aes(x = Group, y = mean)) +
  geom_boxplot() +
  stat_summary(fun = "mean", geom = "point", shape=23, size=4, fill = "white") +
  scale_y_continuous(name = "mean(Weight)")+labs(title="Without outliers")+theme(
  plot.title = element_text(hjust = 0.5))

After plotting the mean weight for each group (excluding the baseline measurement), we can again see that groups 2 and 3 are closer to each other than to Group 1. Each group also seems to have one outlier. After excluding these extreme values, the IQRs of groups 2 and 3 become much narrower than with the outliers, while the IQR of Group 1 does not change much. The group means are roughly the same in both graphs, and there seems to be a clear difference between them.
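Instead of hard-coding the cut-offs, the outliers could also be removed with the usual 1.5 x IQR rule within each group (a sketch; with this data it should flag the same three animals):

# Keep only observations within 1.5*IQR of the quartiles, per group
ratS2b <- ratS1 %>%
  group_by(Group) %>%
  filter(mean > quantile(mean, 0.25) - 1.5 * IQR(mean),
         mean < quantile(mean, 0.75) + 1.5 * IQR(mean)) %>%
  ungroup()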

Statistical tests on mean summary measures

### One way ANOVA to see if there is a difference between groups (with data without outliers)

fit1<-aov(mean ~ Group, ratS2)
summary(fit1)
##             Df Sum Sq Mean Sq F value   Pr(>F)    
## Group        2 176917   88458    2836 1.69e-14 ***
## Residuals   10    312      31                     
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
TukeyHSD(fit1,"Group")
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = mean ~ Group, data = ratS2)
## 
## $Group
##          diff       lwr       upr p adj
## 2-1 183.64286 173.07885 194.20686     0
## 3-1 269.50952 258.94552 280.07353     0
## 3-2  85.86667  73.36717  98.36617     0
### Adding the baseline and testing the difference between groups adjusted for the baseline (using the data that still includes the outliers, so that the baseline column aligns with all 16 rats)

RATS <- read.table("https://raw.githubusercontent.com/KimmoVehkalahti/MABS/master/Examples/data/rats.txt", sep="\t", header=T)

str(RATS)
## 'data.frame':    16 obs. of  13 variables:
##  $ ID   : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ Group: int  1 1 1 1 1 1 1 1 2 2 ...
##  $ WD1  : int  240 225 245 260 255 260 275 245 410 405 ...
##  $ WD8  : int  250 230 250 255 260 265 275 255 415 420 ...
##  $ WD15 : int  255 230 250 255 255 270 260 260 425 430 ...
##  $ WD22 : int  260 232 255 265 270 275 270 268 428 440 ...
##  $ WD29 : int  262 240 262 265 270 275 273 270 438 448 ...
##  $ WD36 : int  258 240 265 268 273 277 274 265 443 460 ...
##  $ WD43 : int  266 243 267 270 274 278 276 265 442 458 ...
##  $ WD44 : int  266 244 267 272 273 278 271 267 446 464 ...
##  $ WD50 : int  265 238 264 274 276 284 282 273 456 475 ...
##  $ WD57 : int  272 247 268 273 278 279 281 274 468 484 ...
##  $ WD64 : int  278 245 269 275 280 281 284 278 478 496 ...
RATS$ID <- factor(RATS$ID)
RATS$Group <- factor(RATS$Group)

# Add the baseline from the original data as a new variable to the summary data
ratS3 <- ratS1 %>%
  mutate(baseline = RATS$WD1)

str(ratS3)
## tibble [16 x 4] (S3: tbl_df/tbl/data.frame)
##  $ Group   : Factor w/ 3 levels "1","2","3": 1 1 1 1 1 1 1 1 2 2 ...
##  $ ID      : Factor w/ 16 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
##  $ mean    : num [1:16] 263 239 262 267 271 ...
##  $ baseline: int [1:16] 240 225 245 260 255 260 275 245 410 405 ...
# Fit the linear model with the mean as the response and compute the analysis of covariance table for the fitted model with anova()
fit <- lm(mean ~ baseline + Group, data = ratS3)

summary(fit)
## 
## Call:
## lm(formula = mean ~ baseline + Group, data = ratS3)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -23.905  -4.194   2.190   7.577  14.800 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 33.16375   21.87657   1.516   0.1554    
## baseline     0.92513    0.08572  10.793 1.56e-07 ***
## Group2      34.85753   18.82308   1.852   0.0888 .  
## Group3      23.67526   23.25324   1.018   0.3287    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 11.68 on 12 degrees of freedom
## Multiple R-squared:  0.9936, Adjusted R-squared:  0.992 
## F-statistic: 622.1 on 3 and 12 DF,  p-value: 1.989e-13
anova(fit)
## Analysis of Variance Table
## 
## Response: mean
##           Df Sum Sq Mean Sq   F value   Pr(>F)    
## baseline   1 253625  253625 1859.8201 1.57e-14 ***
## Group      2    879     439    3.2219  0.07586 .  
## Residuals 12   1636     136                       
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
fit2<-aov(mean ~ baseline + Group, data=ratS3)
summary(fit2)
##             Df Sum Sq Mean Sq  F value   Pr(>F)    
## baseline     1 253625  253625 1859.820 1.57e-14 ***
## Group        2    879     439    3.222   0.0759 .  
## Residuals   12   1636     136                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
TukeyHSD(fit2,"Group")
## Warning in replications(paste("~", xx), data = mf): non-factors ignored:
## baseline
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = mean ~ baseline + Group, data = ratS3)
## 
## $Group
##           diff        lwr       upr     p adj
## 2-1  12.806026  -6.272296 31.884347 0.2140553
## 3-1  -4.347112 -23.425434 14.731209 0.8185859
## 3-2 -17.153138 -39.182886  4.876611 0.1364887

Because we now have three groups, a t-test is not applicable, so we use one-way ANOVA to test whether the groups differ. With the data excluding outliers there is a significant difference between the groups, and the post-hoc Tukey HSD test shows a significant difference in every pairwise comparison. However, after we add the baseline variable to the model, the baseline is significantly related to weight but the group variable is no longer significant, and the pairwise comparisons show no significant difference between any pair. We can therefore conclude that the diet the rats followed does not explain the differences in the mean weight measure once the analysis is adjusted for the baseline values.
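Before leaning on this conclusion it would be sensible to glance at the usual residual diagnostics of the baseline-adjusted model (a quick sketch using base R's plot method for lm objects):

# Residuals vs fitted, normal Q-Q, scale-location and residuals vs leverage
par(mfrow = c(2, 2))
plot(fit)
par(mfrow = c(1, 1))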

Chapter 9 analyses on BPRS data

BPRS <- read.csv("data/BPRSL.csv", header=T)

str(BPRS)
## 'data.frame':    360 obs. of  5 variables:
##  $ treatment: int  1 1 1 1 1 1 1 1 1 1 ...
##  $ subject  : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ weeks    : chr  "week0" "week0" "week0" "week0" ...
##  $ bprs     : int  42 58 54 55 72 48 71 30 41 57 ...
##  $ week     : int  0 0 0 0 0 0 0 0 0 0 ...
# Convert subject and treatment to factors
BPRS$subject <- factor(BPRS$subject)
BPRS$treatment <- factor(BPRS$treatment)
library(ggplot2)

ggplot(BPRS, aes(x = week, y = bprs, colour = subject, linetype = treatment)) +
  geom_line()

BPRS_w <- read.table("https://raw.githubusercontent.com/KimmoVehkalahti/MABS/master/Examples/data/BPRS.txt", sep=" ", header=TRUE)

str(BPRS_w)
## 'data.frame':    40 obs. of  11 variables:
##  $ treatment: int  1 1 1 1 1 1 1 1 1 1 ...
##  $ subject  : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ week0    : int  42 58 54 55 72 48 71 30 41 57 ...
##  $ week1    : int  36 68 55 77 75 43 61 36 43 51 ...
##  $ week2    : int  36 61 41 49 72 41 47 38 39 51 ...
##  $ week3    : int  43 55 38 54 65 38 30 38 35 55 ...
##  $ week4    : int  41 43 43 56 50 36 27 31 28 53 ...
##  $ week5    : int  40 34 28 50 39 29 40 26 22 43 ...
##  $ week6    : int  38 28 29 47 32 33 30 26 20 43 ...
##  $ week7    : int  47 28 25 42 38 27 31 25 23 39 ...
##  $ week8    : int  51 28 24 46 32 25 31 24 21 32 ...
# Scatterplot matrix of the weekly BPRS measurements
pairs(BPRS_w[,3:11], pch = 19)

After plotting bprs over the study period, there seems to be no clear difference between the solid and dashed lines, i.e. no difference between the treatment groups. The scatterplot matrix shows clear patterns: since the data are longitudinal measurements from the same individuals at different time points, the plot demonstrates that the repeated measures are not independent of one another.
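The dependence can be made explicit by computing the correlations between the weekly measurements in the wide data; corrplot, loaded at the start of this exercise, can then visualize the matrix (a sketch):

# Correlations between the weekly BPRS measurements; values clearly above
# zero confirm that the repeated measures are not independent
cor_bprs <- cor(BPRS_w[, 3:11])
round(cor_bprs, 2)
corrplot(cor_bprs, type = "upper")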

# Create a basic linear model
bprs_reg <- lm(bprs ~ week + treatment, data = BPRS)

# print out a summary of the model
summary(bprs_reg)
## 
## Call:
## lm(formula = bprs ~ week + treatment, data = BPRS)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -22.454  -8.965  -3.196   7.002  50.244 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  46.4539     1.3670  33.982   <2e-16 ***
## week         -2.2704     0.2524  -8.995   <2e-16 ***
## treatment2    0.5722     1.3034   0.439    0.661    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 12.37 on 357 degrees of freedom
## Multiple R-squared:  0.1851, Adjusted R-squared:  0.1806 
## F-statistic: 40.55 on 2 and 357 DF,  p-value: < 2.2e-16
library(lme4)
## Loading required package: Matrix
## 
## Attaching package: 'Matrix'
## The following objects are masked from 'package:tidyr':
## 
##     expand, pack, unpack
# Create a random intercept model
bprs_ref <- lmer(bprs ~ week + treatment + (1 | subject), data = BPRS, REML = FALSE)

# Print the summary of the model
summary(bprs_ref)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: bprs ~ week + treatment + (1 | subject)
##    Data: BPRS
## 
##      AIC      BIC   logLik deviance df.resid 
##   2748.7   2768.1  -1369.4   2738.7      355 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.0481 -0.6749 -0.1361  0.4813  3.4855 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  subject  (Intercept)  47.41    6.885  
##  Residual             104.21   10.208  
## Number of obs: 360, groups:  subject, 20
## 
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)  46.4539     1.9090  24.334
## week         -2.2704     0.2084 -10.896
## treatment2    0.5722     1.0761   0.532
## 
## Correlation of Fixed Effects:
##            (Intr) week  
## week       -0.437       
## treatment2 -0.282  0.000
# Create a random intercept and random slope model
bprs_ref1 <- lmer(bprs ~ week + treatment + (week | subject), data = BPRS, REML = FALSE)

# print a summary of the model
summary(bprs_ref1)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: bprs ~ week + treatment + (week | subject)
##    Data: BPRS
## 
##      AIC      BIC   logLik deviance df.resid 
##   2745.4   2772.6  -1365.7   2731.4      353 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.8919 -0.6194 -0.0691  0.5531  3.7976 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr 
##  subject  (Intercept) 64.8222  8.0512        
##           week         0.9609  0.9802   -0.51
##  Residual             97.4305  9.8707        
## Number of obs: 360, groups:  subject, 20
## 
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)  46.4539     2.1052  22.066
## week         -2.2704     0.2977  -7.626
## treatment2    0.5722     1.0405   0.550
## 
## Correlation of Fixed Effects:
##            (Intr) week  
## week       -0.582       
## treatment2 -0.247  0.000
# perform an ANOVA test on the two models
anova(bprs_ref1, bprs_ref)
## Data: BPRS
## Models:
## bprs_ref: bprs ~ week + treatment + (1 | subject)
## bprs_ref1: bprs ~ week + treatment + (week | subject)
##           npar    AIC    BIC  logLik deviance  Chisq Df Pr(>Chisq)  
## bprs_ref     5 2748.7 2768.1 -1369.4   2738.7                       
## bprs_ref1    7 2745.4 2772.6 -1365.7   2731.4 7.2721  2    0.02636 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Create a random intercept and random slope model with the interaction
bprs_ref2 <- lmer(bprs ~ week + treatment + week*treatment+ (week | subject), data = BPRS, REML = FALSE)

# print a summary of the model
summary(bprs_ref2)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: bprs ~ week + treatment + week * treatment + (week | subject)
##    Data: BPRS
## 
##      AIC      BIC   logLik deviance df.resid 
##   2744.3   2775.4  -1364.1   2728.3      352 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.0512 -0.6271 -0.0768  0.5288  3.9260 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr 
##  subject  (Intercept) 64.9964  8.0620        
##           week         0.9687  0.9842   -0.51
##  Residual             96.4707  9.8220        
## Number of obs: 360, groups:  subject, 20
## 
## Fixed effects:
##                 Estimate Std. Error t value
## (Intercept)      47.8856     2.2521  21.262
## week             -2.6283     0.3589  -7.323
## treatment2       -2.2911     1.9090  -1.200
## week:treatment2   0.7158     0.4010   1.785
## 
## Correlation of Fixed Effects:
##             (Intr) week   trtmn2
## week        -0.650              
## treatment2  -0.424  0.469       
## wek:trtmnt2  0.356 -0.559 -0.840
# perform an ANOVA test on the two models
anova(bprs_ref2, bprs_ref1)
## Data: BPRS
## Models:
## bprs_ref1: bprs ~ week + treatment + (week | subject)
## bprs_ref2: bprs ~ week + treatment + week * treatment + (week | subject)
##           npar    AIC    BIC  logLik deviance  Chisq Df Pr(>Chisq)  
## bprs_ref1    7 2745.4 2772.6 -1365.7   2731.4                       
## bprs_ref2    8 2744.3 2775.4 -1364.1   2728.3 3.1712  1    0.07495 .
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# draw the plot of BPRS with the observed bprs values
ggplot(BPRS, aes(x = week, y = bprs, colour = subject)) +
  geom_line(aes(linetype = treatment)) +
  scale_x_continuous(name = "Weeks") +
  scale_y_continuous(name = "BPRS") +
  theme(legend.position = "top")

# Create a vector of the fitted values
Fitted <- fitted(bprs_ref2)

# Add the fitted values as a new column to BPRS
BPRS <- BPRS %>% mutate(fit=Fitted %>% as.numeric())

# draw the plot of BPRS with the fitted values of bprs
ggplot(BPRS, aes(x = week, y = fit, colour = subject)) +
  geom_line(aes(linetype = treatment)) +
  scale_x_continuous(name = "Weeks") +
  scale_y_continuous(name = "Fitted BPRS") +
  theme(legend.position = "top")

After fitting a basic linear model, we can see that there seems to be no treatment effect, as the coefficient for treatment is not significant; the week variable, however, is significantly related to BPRS. Note that this model ignores the dependence between the repeated measurements, which is why we move on to mixed models.

Next we fit a random intercept model, which allows the linear regression fit for each individual to differ in intercept. The estimates are quite similar to those of the basic linear model, with some small differences in the standard errors. Again, there is no evidence of a treatment effect. The estimated variance of the subject random effect is not very large, so there is probably not much variation between the intercepts of the individual BPRS profiles.
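One way to gauge how much of the total variation the subject effect captures is the intraclass correlation, computed from the variance components printed above (a sketch; VarCorr() comes from lme4):

# ICC = subject variance / (subject variance + residual variance)
vc <- as.data.frame(VarCorr(bprs_ref))
icc <- vc$vcov[vc$grp == "subject"] / sum(vc$vcov)
icc
# With the estimates above: 47.41 / (47.41 + 104.21), i.e. about 0.31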

We proceed to fit a random intercept and random slope model, which also allows the regression fit for each individual to differ in slope. Again there is no evidence of a treatment effect, and the estimates are similar to the previous models. However, according to the likelihood ratio test, the random intercept and slope model fits better than the random intercept model (p-value 0.026).
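The likelihood ratio test that anova() reports can also be reproduced by hand from the two deviances printed above (a worked check):

# Chi-squared statistic = difference of the deviances; df = difference in
# the number of model parameters (7 - 5 = 2)
pchisq(2738.7 - 2731.4, df = 2, lower.tail = FALSE)
# about 0.026, matching the anova() output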

Finally we fit a random intercept and random slope model with a week*treatment interaction term. Now the estimates are somewhat different, but neither the coefficient for the treatment variable nor the one for the interaction term is statistically significant. Also, when we compare the random intercept and slope model with the interaction term to the model without it, the likelihood ratio test suggests that the interaction model is no better fit (p-value 0.075).

Plotting the fitted values also shows that it is very hard to see any difference between the two treatment groups.
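To make the visual comparison easier, the observed and fitted profiles could also be drawn side by side in one figure (a sketch using tidyr, which was loaded at the start of this exercise; column names follow those created above):

# Gather observed and fitted BPRS into long format and facet the two panels
BPRS_long <- BPRS %>%
  pivot_longer(cols = c(bprs, fit), names_to = "type", values_to = "value")

ggplot(BPRS_long, aes(x = week, y = value, colour = subject)) +
  geom_line(aes(linetype = treatment)) +
  facet_wrap(~ type) +
  theme(legend.position = "top")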